
    Dependence of the spike-triggered average voltage on membrane response properties

    The spike-triggered average voltage (STV) is an experimentally measurable quantity that is determined by both the membrane response properties and the statistics of the synaptic drive. Here, the form of the STV is modelled for neurons with three distinct types of subthreshold dynamics: passive decay, h-current sag, and damped oscillations. Analytical expressions for the STV are first obtained in the low-noise limit, identifying how the subthreshold dynamics of the cell affect its form. A second result is then derived that captures the power-law behaviour of the STV near the spike threshold.
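    The STV itself is straightforward to estimate from data. A minimal NumPy sketch, with a toy Ornstein-Uhlenbeck-style voltage trace standing in for a recording (threshold, time constant, and noise level are illustrative, not from the paper):

    ```python
    import numpy as np

    def spike_triggered_average(v, spike_idx, window):
        """Average the voltage segments in the `window` samples preceding each spike."""
        segments = [v[i - window:i] for i in spike_idx if i >= window]
        return np.asarray(segments).mean(axis=0)

    # Toy data: OU-like voltage with "spikes" at upward threshold crossings.
    rng = np.random.default_rng(0)
    n, dt, tau = 20000, 0.1, 10.0                  # steps, ms, membrane time constant (ms)
    v = np.zeros(n)
    for i in range(1, n):
        v[i] = v[i - 1] + dt * (-v[i - 1] / tau) + np.sqrt(dt) * rng.normal(0, 0.5)
    spikes = np.flatnonzero((v[1:] > 1.0) & (v[:-1] <= 1.0)) + 1  # upward crossings
    stv = spike_triggered_average(v, spikes, window=100)
    ```

    The STV rises toward the threshold in its final samples, which is the regime where the paper's power-law result applies.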

    Coherent response of the Hodgkin-Huxley neuron in the high-input regime

    We analyze the response of the Hodgkin-Huxley neuron to a large number of uncorrelated stochastic inhibitory and excitatory post-synaptic spike trains. In order to clarify the various mechanisms responsible for noise-induced spike triggering, we examine the model in its silent regime. We report the coexistence of two distinct coherence resonances: the first, at low noise, is due to the stimulation of "correlated" subthreshold oscillations; the second, at intermediate noise variances, is instead related to the regularization of the emitted spike trains. Comment: 5 pages, 5 EPS figures; contribution presented at the CNS 2006 conference held in Edinburgh (UK); to appear in Neurocomputing.

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships linking the change of gain with respect to both mean and variance to the receptive fields derived from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Comment: 24 pages, 4 figures, 1 supporting information.
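    The reverse-correlation step mentioned above can be sketched numerically: for a white-noise stimulus, the spike-triggered average recovers the linear filter up to scale. The exponential filter, hard-threshold nonlinearity, and parameters below are illustrative assumptions, not the conductance-based models of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, L = 50000, 20
    stim = rng.normal(0, 1, T)                    # white-noise stimulus
    true_filter = np.exp(-np.arange(L) / 5.0)     # assumed exponential filter
    true_filter /= np.linalg.norm(true_filter)

    # Filtered stimulus drives a static nonlinearity (here a hard threshold).
    drive = np.convolve(stim, true_filter)[:T]
    spike_idx = np.flatnonzero(drive > 1.0)
    spike_idx = spike_idx[spike_idx >= L - 1]

    # Spike-triggered average: element k averages stim[t - k] over spike times t.
    sta = np.mean([stim[t - L + 1:t + 1][::-1] for t in spike_idx], axis=0)
    sta /= np.linalg.norm(sta)
    ```

    For Gaussian white noise the STA is proportional to the true filter, so the normalized overlap `np.dot(sta, true_filter)` approaches 1 as the number of spikes grows.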

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model: a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population. Comment: 11 pages, 7 figures.
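    For a population small enough to enumerate, a distribution of this form can be written down directly: single-cell fields depend on the stimulus through a linear front end, while pairwise couplings are stimulus-independent. The filters, couplings, and population size below are illustrative assumptions, not the fitted 100-cell model of the paper:

    ```python
    import itertools
    import numpy as np

    def sdme_distribution(h, J):
        """Enumerate P(sigma) ∝ exp(h·σ + ½ σᵀJσ) over all binary words (small N only)."""
        n = len(h)
        words = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
        energy = words @ h + 0.5 * np.einsum('wi,ij,wj->w', words, J, words)
        p = np.exp(energy)
        return words, p / p.sum()

    # Toy population of 3 cells; the stimulus enters through the fields h(s).
    rng = np.random.default_rng(2)
    stimulus = rng.normal(0, 1, 3)
    filters = np.eye(3)                  # each cell sees one stimulus component
    h = filters @ stimulus               # stimulus-dependent single-cell fields
    J = np.array([[0.0, 0.5, 0.0],       # symmetric pairwise couplings
                  [0.5, 0.0, -0.3],
                  [0.0, -0.3, 0.0]])
    words, p = sdme_distribution(h, J)
    ```

    For realistic population sizes the normalization cannot be enumerated, which is why fitting such models requires the approximate inference machinery developed in the paper.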

    Neural Decision Boundaries for Maximal Information Transmission

    We consider here how to separate multidimensional signals into two categories such that the binary decision transmits the maximum possible information about those signals. Our motivation comes from the nervous system, where neurons process multidimensional signals into a binary sequence of responses (spikes). In a small-noise limit, we derive a general equation for the decision boundary that locally relates its curvature to the probability distribution of inputs. We show that for Gaussian inputs the optimal boundaries are planar, but for non-Gaussian inputs the curvature is nonzero. As an example, we consider exponentially distributed inputs, which are known to approximate a variety of signals from the natural environment. Comment: 5 pages, 3 figures.
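    The planar-boundary result can be illustrated numerically. For a deterministic binary decision the transmitted information equals the output entropy H(y), and for Gaussian inputs a planar boundary maximizes it by bisecting the distribution. The dimensions, boundary normal, and offsets below are illustrative:

    ```python
    import numpy as np

    def binary_entropy(p):
        """Entropy (bits) of a binary variable with P(y = 1) = p."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

    # For a noiseless decision y = [w·x > c], I(x; y) = H(y) - H(y|x) = H(y),
    # so sliding the plane away from the median can only lose information.
    rng = np.random.default_rng(3)
    x = rng.normal(0, 1, (100000, 2))            # 2-D Gaussian signals
    w = np.array([1.0, 1.0]) / np.sqrt(2)        # unit boundary normal
    offsets = [0.0, 0.5, 1.0]
    info = [binary_entropy(np.mean(x @ w > c)) for c in offsets]
    ```

    The offset-zero plane gives close to 1 bit, and the information falls monotonically as the plane moves off-center; the paper's boundary equation characterizes how this optimum bends for non-Gaussian inputs.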

    Thalamic neuron models encode stimulus information by burst-size modulation

    Thalamic neurons have long been assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information about the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes in instantaneous input features, such as slope, phase, and amplitude. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with any of these classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
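    A common first step in such analyses is to segment a spike train into n-spike bursts using an inter-spike-interval criterion, after which burst-triggered averages can be computed per burst size. The 5 ms cutoff and the toy spike times below are hypothetical:

    ```python
    import numpy as np

    def burst_sizes(spike_times, max_isi):
        """Group spikes into bursts: consecutive spikes separated by at most
        `max_isi` belong to the same burst; return each burst's spike count."""
        sizes, count = [], 1
        for isi in np.diff(spike_times):
            if isi <= max_isi:
                count += 1
            else:
                sizes.append(count)
                count = 1
        sizes.append(count)
        return sizes

    # Toy spike train (ms): two 3-spike bursts, one doublet, one isolated spike.
    train = [10, 12, 14, 100, 102, 104, 200, 203, 300]
    print(burst_sizes(train, max_isi=5))   # → [3, 3, 2, 1]
    ```

    Grouping stimulus segments by the burst size returned here is what lets the spike count act as the information-carrying symbol in the burst code described above.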

    From Spiking Neuron Models to Linear-Nonlinear Models

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static nonlinear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static nonlinearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static nonlinearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
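    The trial-averaged firing rate that such an LN cascade is meant to reproduce can be estimated directly from a spiking simulation. This minimal LIF sketch with background noise uses illustrative parameters throughout; it is not the parameter-free reduction derived in the paper:

    ```python
    import numpy as np

    def lif_spikes(I, dt=0.1, tau=20.0, v_th=1.0, v_reset=0.0, sigma=0.5, rng=None):
        """Leaky integrate-and-fire neuron driven by current I plus white noise."""
        if rng is None:
            rng = np.random.default_rng()
        noise = sigma * np.sqrt(dt / tau) * rng.normal(size=len(I))
        v, out = 0.0, np.zeros(len(I))
        for t in range(len(I)):
            v += dt * (I[t] - v) / tau + noise[t]
            if v >= v_th:
                out[t], v = 1.0, v_reset   # spike and reset
        return out

    # PSTH over repeated trials of a slow sinusoidal signal: the trial-averaged
    # rate is the target the LN cascade (filter + static nonlinearity) predicts.
    rng = np.random.default_rng(4)
    t = np.arange(0.0, 1000.0, 0.1)                      # 1 s of time, dt = 0.1 ms
    signal = 1.0 + 0.5 * np.sin(2 * np.pi * t / 200.0)   # 5 Hz modulation
    psth = np.mean([lif_spikes(signal, rng=rng) for _ in range(50)], axis=0)
    ```

    Comparing this PSTH against the cascade's rate prediction, while sweeping the signal and noise parameters, is the kind of quantitative check the paper performs.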

    Put to the Test: For a New Sociology of Testing

    In an age defined by computational innovation, testing seems to have become ubiquitous, and tests are routinely deployed as a form of governance, a marketing device, an instrument for political intervention, and an everyday practice to evaluate the self. This essay argues that something more radical is happening here than simply attempts to move tests from the laboratory into social settings. The challenge that a new sociology of testing must address is that ubiquitous testing changes the relations between science, engineering and sociology: engineering today is in the very stuff of where society happens. It is not that the tests of 21st-century engineering occur within a social context, but that it is the very fabric of the social that is being put to the test. To understand how testing and the social relate today, we must investigate how testing operates on social life, through the modification of its settings. One way to clarify the difference is to say that the new forms of testing can be captured neither within the logic of the field test nor that of the controlled experiment. Whereas tests once happened inside social environments, today's tests directly and deliberately modify the social environment.

    Art in the Age of Machine Intelligence

    In this wide-ranging essay, the leader of Google's Seattle AI group and founder of the Artists and Machine Intelligence program discusses the long-standing and complex relationship between art and technology. The transformation of artistic practice and theory that attended the 19th-century photographic revolution is explored as a parallel for the current revolution in machine intelligence, which promises to mechanize (or democratize) not only the means of reproduction, but also those of production.